safety perception


Integrating Perceptions: A Human-Centered Physical Safety Model for Human-Robot Interaction

Pandey, Pranav, Parasuraman, Ramviyas, Doshi, Prashant

arXiv.org Artificial Intelligence

Ensuring safety in human-robot interaction (HRI) is essential to foster user trust and enable the broader adoption of robotic systems. Traditional safety models primarily rely on sensor-based measures, such as relative distance and velocity, to assess physical safety. However, these models often fail to capture subjective safety perceptions, which are shaped by individual traits and contextual factors. In this paper, we introduce and analyze a parameterized general safety model that bridges the gap between physical and perceived safety by incorporating a personalization parameter, $ρ$, into the safety measurement framework to account for individual differences in safety perception. Through a series of hypothesis-driven human-subject studies in a simulated rescue scenario, we investigate how emotional state, trust, and robot behavior influence perceived safety. Our results show that $ρ$ effectively captures meaningful individual differences, driven by affective responses, trust in task consistency, and clustering into distinct user types. Specifically, our findings confirm that predictable and consistent robot behavior, as well as the elicitation of positive emotional states, significantly enhances perceived safety. Moreover, responses cluster into a small number of user types, supporting adaptive personalization based on shared safety models. Notably, participant role significantly shapes safety perception, and repeated exposure reduces perceived safety for participants in the casualty role, emphasizing the impact of physical interaction and experiential change. These findings highlight the importance of adaptive, human-centered safety models that integrate both psychological and behavioral dimensions, offering a pathway toward more trustworthy and effective HRI in safety-critical domains.
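The abstract does not spell out the functional form of the parameterized model, but the idea of folding a personalization parameter ρ into a sensor-based safety measure can be sketched as follows. The function name, the risk proxy, and the exponential form are all illustrative assumptions, not the paper's actual model:

```python
import math

def perceived_safety(distance_m: float, rel_speed_mps: float, rho: float = 1.0) -> float:
    """Illustrative perceived-safety score in (0, 1].

    Combines a physical risk proxy (closer and faster => riskier) with a
    personalization parameter rho: rho > 1 models a more safety-sensitive
    user, rho < 1 a more tolerant one.
    """
    physical_risk = rel_speed_mps / max(distance_m, 1e-6)  # crude sensor-based proxy
    return math.exp(-rho * physical_risk)

# The same physical situation is perceived differently depending on rho:
tolerant = perceived_safety(distance_m=2.0, rel_speed_mps=1.0, rho=0.5)
sensitive = perceived_safety(distance_m=2.0, rel_speed_mps=1.0, rho=2.0)
```

Under this toy model, the sensitive user's score (ρ = 2) is lower than the tolerant user's for identical sensor readings, which is the kind of individual difference the paper's ρ is meant to capture.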


Public Perceptions of Autonomous Vehicles: A Survey of Pedestrians and Cyclists in Pittsburgh

Bedekar, Rudra Y.

arXiv.org Artificial Intelligence

This study investigates how autonomous vehicle (AV) technology is perceived by pedestrians and bicyclists in Pittsburgh. Using survey data from over 1200 respondents, the research explores the interplay between demographics, AV interactions, infrastructural readiness, safety perceptions, and trust. Findings highlight demographic divides, infrastructure gaps, and the crucial role of communication and education in AV adoption. AV integration into urban settings has sparked serious concerns about how these vehicles may affect vulnerable road users, especially pedestrians and cyclists. It is critical to understand the comfort, safety, and views of these road users as AVs are tested and used more frequently in places like Pittsburgh. Sharing the road with autonomous vehicles poses special risks for pedestrians and cyclists because of their exposure and lack of physical protection. Among these issues are worries about AVs' capacity to recognize and react to their movements, especially in heavy or unpredictable traffic. Furthermore, concerns and discomfort may be exacerbated by the inadequacy of current urban infrastructure to support the safe coexistence of AVs and non-motorized users.


Urban Safety Perception Through the Lens of Large Multimodal Models: A Persona-based Approach

Beneduce, Ciro, Lepri, Bruno, Luca, Massimiliano

arXiv.org Artificial Intelligence

Understanding how urban environments are perceived in terms of safety is crucial for urban planning and policymaking. Traditional methods like surveys are limited by high cost, required time, and scalability issues. To overcome these challenges, this study introduces Large Multimodal Models (LMMs), specifically Llava 1.6 7B, as a novel approach to assess safety perceptions of urban spaces using street-view images. In addition, the research investigated how this task is affected by different socio-demographic perspectives, simulated by the model through Persona-based prompts. Without additional fine-tuning, the model achieved an average F1-score of 59.21% in classifying urban scenarios as safe or unsafe, identifying three key drivers of perceived unsafety: isolation, physical decay, and urban infrastructural challenges. Moreover, incorporating Persona-based prompts revealed significant variations in safety perceptions across the socio-demographic groups of age, gender, and nationality. Older and female Personas consistently perceive higher levels of unsafety than younger or male Personas. Similarly, nationality-specific differences were evident in the proportion of unsafe classifications, ranging from 19.71% in Singapore to 40.15% in Botswana. Notably, the model's default configuration aligned most closely with a middle-aged, male Persona. These findings highlight the potential of LMMs as a scalable and cost-effective alternative to traditional methods for measuring urban safety perceptions. While the sensitivity of these models to socio-demographic factors underscores the need for thoughtful deployment, their ability to provide nuanced perspectives makes them a promising tool for AI-driven urban planning.
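The persona-based prompting described above amounts to prefixing the safety question with a socio-demographic role description. A minimal sketch of how such a prompt could be composed is below; the persona fields and exact wording are assumptions, not the paper's templates, and the actual call to the LMM is omitted:

```python
def build_persona_prompt(persona: dict, question: str) -> str:
    """Compose a persona-conditioned prompt for a street-view safety query.

    The persona keys ("age", "gender", "nationality") mirror the
    socio-demographic axes varied in the study; the phrasing is illustrative.
    """
    return (
        f"You are a {persona['age']}-year-old {persona['gender']} from "
        f"{persona['nationality']}. Look at the attached street-view image and "
        f"answer: {question} Reply with exactly one word: safe or unsafe."
    )

prompt = build_persona_prompt(
    {"age": 70, "gender": "woman", "nationality": "Singapore"},
    "Would you feel safe walking here alone at night?",
)
```

Sweeping this function over a grid of personas and a fixed image set yields per-group safe/unsafe proportions, which is how nationality- and age-specific differences like those reported above could be tabulated.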


Insights on Disagreement Patterns in Multimodal Safety Perception across Diverse Rater Groups

Rastogi, Charvi, Teh, Tian Huey, Mishra, Pushkar, Patel, Roma, Ashwood, Zoe, Davani, Aida Mostafazadeh, Diaz, Mark, Paganini, Michela, Parrish, Alicia, Wang, Ding, Prabhakaran, Vinodkumar, Aroyo, Lora, Rieser, Verena

arXiv.org Artificial Intelligence

AI systems crucially rely on human ratings, but these ratings are often aggregated, obscuring the inherent diversity of perspectives in real-world phenomena. This is particularly concerning when evaluating the safety of generative AI, where perceptions and associated harms can vary significantly across socio-cultural contexts. While recent research has studied the impact of demographic differences on annotating text, there is limited understanding of how these subjective variations affect multimodal safety in generative AI. To address this, we conduct a large-scale study employing highly-parallel safety ratings of about 1000 text-to-image (T2I) generations from a demographically diverse rater pool of 630 raters balanced across 30 intersectional groups across age, gender, and ethnicity. Our study shows that (1) there are significant differences across demographic groups (including intersectional groups) in how severe they assess the harm to be, and that these differences vary across different types of safety violations, (2) the diverse rater pool captures annotation patterns that are substantially different from expert raters trained on a specific set of safety policies, and (3) the differences we observe in T2I safety are distinct from previously documented group-level differences in text-based safety tasks. To further understand these varying perspectives, we conduct a qualitative analysis of the open-ended explanations provided by raters. This analysis reveals core differences in the reasons why different groups perceive harms in T2I generations. Our findings underscore the critical need for incorporating diverse perspectives into safety evaluation of generative AI, ensuring these systems are truly inclusive and reflect the values of all users.
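Quantifying group-level disagreement of the kind described above can be done by grouping parallel ratings by demographic group and comparing per-group severity statistics. The toy data and group labels below are invented for illustration; the study's actual data covers about 1000 T2I generations and 30 intersectional groups:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (rater_group, severity 0-3) ratings for one generation:
ratings = [
    ("women_18-24", 2), ("women_18-24", 3), ("men_18-24", 1),
    ("men_18-24", 1), ("women_55+", 3), ("women_55+", 2),
]

by_group = defaultdict(list)
for group, severity in ratings:
    by_group[group].append(severity)

group_means = {g: mean(v) for g, v in by_group.items()}

# Spread of per-group means is a simple proxy for demographic disagreement;
# aggregating to one overall mean would hide it entirely.
disagreement = max(group_means.values()) - min(group_means.values())
```

Repeating this per safety-violation type would surface the study's finding that the size of group differences varies across violation categories.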


Navigating the new era of commerce: Exploring the relationship between anthropomorphism in voice assistants and user safety perception

AIHub

Image created by Guillermo Calahorra-Candao using ChatGPT. Prompt: "create an image of a person conversing with a virtual voice assistant (just like Alexa, Siri, or Google), while simultaneously wondering if the assistant might actually be human". In an era where technology continuously reshapes our daily interactions, the rise of virtual voice assistants (VAs) like Alexa, Google Home, and Siri represents a significant leap. Originally designed to enhance smartphone usability, these VAs have transcended their initial purpose, finding their way into various consumer devices and altering the user experience landscape. However, despite their widespread integration, a notable reluctance in adopting voice shopping persists, primarily due to concerns regarding safety.


Quantitative and Qualitative Evaluation of Reinforcement Learning Policies for Autonomous Vehicles

Ferrarotti, Laura, Luca, Massimiliano, Santin, Gabriele, Previati, Giorgio, Mastinu, Gianpiero, Campi, Elena, Uccello, Lorenzo, Albanese, Antonino, Zalaya, Praveen, Roccasalva, Alessandro, Lepri, Bruno

arXiv.org Artificial Intelligence

Optimizing traffic dynamics in an evolving transportation landscape is crucial, particularly in scenarios where autonomous vehicles (AVs) with varying levels of autonomy coexist with human-driven cars. This paper presents a novel approach to optimizing the choices of AVs using Proximal Policy Optimization (PPO), a reinforcement learning algorithm. We learned a policy to minimize traffic jams (i.e., minimize the time to cross the scenario) and to minimize pollution in a roundabout in Milan, Italy. Through empirical analysis, we demonstrate that our approach can reduce time and pollution levels. Furthermore, we qualitatively evaluate the learned policy using a cutting-edge cockpit to assess its performance in near-real-world conditions. To gauge the practicality and acceptability of the policy, we conducted evaluations with human participants using the simulator, focusing on a range of metrics like traffic smoothness and safety perception. In general, our findings show that human-driven vehicles benefit from optimizing AV dynamics. Also, participants in the study highlighted that the scenario with 80% AVs is perceived as safer than the scenario with 20%. The same result holds for perceived traffic smoothness.
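The two objectives named above (crossing time and pollution) would enter PPO through the reward signal. A minimal sketch of such a per-step reward is below; the weights, the CO2 proxy, and the linear combination are assumptions, not the paper's actual reward design:

```python
def step_reward(step_time_s: float, co2_g: float,
                w_time: float = 1.0, w_pollution: float = 0.1) -> float:
    """Illustrative per-step reward penalizing delay and emissions.

    PPO maximizes expected return, so minimizing crossing time and
    pollution is expressed as a negated weighted cost.
    """
    return -(w_time * step_time_s + w_pollution * co2_g)

# A congested step (heavy idling, more CO2) is penalized more than free flow:
congested = step_reward(step_time_s=1.0, co2_g=5.0)
free_flow = step_reward(step_time_s=1.0, co2_g=1.0)
```

In practice the weights trade off the two objectives, and a PPO implementation (e.g., from a standard RL library) would sum these rewards over each vehicle's traversal of the roundabout.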


Evaluating the Perceived Safety of Urban City via Maximum Entropy Deep Inverse Reinforcement Learning

Wang, Yaxuan, Zeng, Zhixin, Zhao, Qijun

arXiv.org Artificial Intelligence

Inspired by expert evaluation policies for urban perception, we proposed a novel inverse reinforcement learning (IRL) based framework for predicting urban safety and recovering the corresponding reward function. We also presented a scalable state representation method to model the prediction problem as a Markov decision process (MDP) and used reinforcement learning (RL) to solve it. Additionally, we built a dataset called SmallCity based on crowdsourcing to conduct the research. To the best of our knowledge, this is the first time an IRL approach has been introduced to the urban safety perception and planning field to help experts quantitatively analyze perceptual features. Our results showed that IRL has promising prospects in this field. We will later open-source the crowdsourcing data collection site and the model proposed in this paper.
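Recovering a reward function with maximum-entropy IRL, as in the framework above, reduces (in the linear-reward case) to matching expected feature counts between expert demonstrations and the learner's policy. The two-feature example and the feature values below are illustrative assumptions, not the paper's state representation:

```python
def maxent_irl_step(theta, expert_features, learner_features, lr=0.1):
    """One gradient ascent step of maximum-entropy IRL with a linear
    reward r(s) = theta . f(s).

    The MaxEnt gradient is the difference between expert and learner
    expected feature counts: expert-visited features gain reward weight,
    over-visited learner features lose it.
    """
    return [t + lr * (e - l) for t, e, l in zip(theta, expert_features, learner_features)]

theta = [0.0, 0.0]
# Suppose experts rate/visit "well-maintained" states more than the current
# learner policy does; that feature's reward weight increases:
theta = maxent_irl_step(theta, expert_features=[0.8, 0.2], learner_features=[0.5, 0.5])
```

Iterating this step, re-solving the MDP with RL under the updated reward, and recomputing the learner's feature counts is the standard MaxEnt IRL loop that a framework like the one described would build on.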